OpenClaw’s Skill-Scanning System Fails to Prevent Malicious AI Plugins, Security Experts Warn
Security researchers have exposed critical vulnerabilities in OpenClaw's skill-scanning system, revealing its inability to reliably prevent malicious AI agent plugins. The platform's reliance on VirusTotal and internal moderation fails to establish a secure boundary for third-party skills, particularly those handling sensitive blockchain interactions.
OpenClaw's architecture allows locally deployed agents to inherit system-level access, creating attack vectors for wallet exploits and on-chain manipulations. This flaw emerges as AI agent adoption accelerates across crypto ecosystems, compounding existing risks from unvetted third-party services.
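One common mitigation for this kind of access inheritance is a fail-closed capability allowlist: rather than letting a skill inherit the host agent's full system access, each capability it requests is checked against an explicit grant list. The sketch below is illustrative only; the capability names and `grant` function are assumptions, not OpenClaw's actual API.

```python
# Hypothetical sketch of a capability allowlist for third-party skills.
# Capability names and the grant() helper are illustrative assumptions,
# not OpenClaw's real plugin interface.

ALLOWED_CAPABILITIES = {
    "net.fetch",        # outbound HTTP only
    "fs.read.workdir",  # read files inside the skill's own working directory
}

def grant(skill_name: str, requested: set[str]) -> set[str]:
    """Return only the capabilities a skill may use; everything else is denied."""
    granted = requested & ALLOWED_CAPABILITIES
    denied = requested - granted
    if denied:
        print(f"{skill_name}: denied {sorted(denied)}")
    return granted

# A wallet-touching skill asking for broad access gets almost nothing back:
grant("wallet-helper", {"net.fetch", "wallet.sign", "fs.write.root"})
```

Under this model, a skill that attempts on-chain signing or root filesystem writes is refused those capabilities by default, instead of inheriting them from the agent process.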
The scanning system's inconsistent verdicts, in which VirusTotal flags a skill as suspicious while OpenClaw clears it, leave users open to social engineering attacks. This security gap threatens decentralized applications and smart contract platforms that integrate with AI agent ecosystems.
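A standard fix for conflicting scanner verdicts is to combine them fail-closed: a skill is allowed only when every scanner clears it, and any disagreement blocks installation. The verdict strings and combine rule below are assumptions for illustration, not OpenClaw's actual moderation logic.

```python
# Hypothetical sketch: reconcile two scanner verdicts fail-closed.
# Verdict labels and the combine rule are illustrative assumptions.

def combine_verdicts(virustotal: str, internal: str) -> str:
    """Allow only if BOTH scanners say 'clean'; any disagreement blocks."""
    if virustotal == "clean" and internal == "clean":
        return "allow"
    return "block"

print(combine_verdicts("suspicious", "clean"))  # disagreement -> "block"
print(combine_verdicts("clean", "clean"))       # agreement   -> "allow"
```

With this rule, the scenario the researchers describe, where VirusTotal flags a skill but the platform's own moderation clears it, would result in a block rather than an install.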